
Research Summary
Intelligent transportation and robotics systems; 2D and 3D computer vision; Environment perception and reconstruction; Point cloud processing; Defect and anomaly detection; Digital twins; AI models, with applications in Autonomous Driving, Infrastructure Monitoring, Smart Cities, and Healthcare
Courses & Teaching
CMPE 246 Computer Engineering Design Studio
ENGR 359 Microcomputer Engineering
APSC 258 Applications of Engineering Design
CMPE 401 Deep Learning for Engineers
ENGR 501 Deep and Reinforcement Learning for Engineers
Biography
Dr. Bai is deeply passionate about teaching, with a strong commitment to supporting students’ career development through hands-on learning, system-level thinking, and the integration of hardware and software in electrical and computer engineering. Dr. Bai’s research is also strongly motivated by real-world problem solving and adopts an application-oriented approach that bridges theory and practice across multidisciplinary domains, including intelligent transportation, robotics, healthcare, and smart infrastructure. Dr. Bai has participated in collaborative research and development projects with industry partners and the National Research Council of Canada (NRC), with this work supported through research fellowships funded by NSERC Alliance and Mitacs Accelerate grants. In addition, Dr. Bai actively contributes to community service and Equity, Diversity, and Inclusion (EDI) initiatives that promote positive learning and working environments.
Degrees
Joint PhD, Electrical and Computer Engineering, McMaster University
PhD, MASc, Computer Science and Technology, Chongqing University of Posts and Telecommunications
BASc, Information Science and Technology (Software Engineering), Beijing University of Chemical Technology
Research Interests & Projects
Research, Development, and Application of Environment Perception and Localization Technologies for Intelligent Transportation and Robotics
Visual perception and localization constitute fundamental capabilities of intelligent transportation systems and robotic platforms, providing essential support for environment perception, mapping, and decision-making. In complex real-world environments, factors such as dynamic scene changes, object occlusions, viewpoint variability, and coupled geometric–semantic uncertainties continue to challenge the robustness and automation level of existing perception and localization frameworks, particularly in safety-critical applications.
This research investigates key technologies for intelligent environment perception in autonomous systems and robotics, focusing on the joint modeling of scene geometry, semantic understanding, and visual localization. The study integrates multi-view scene reconstruction, learning-based object detection, and multi-sensor perception, while extending these capabilities to AR-enhanced visualization and occlusion compensation in human-centered systems, including intelligent vehicle cockpits.
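As a minimal illustration of the geometric side of such a perception pipeline, the sketch below back-projects a depth map into a 3D point cloud using a standard pinhole camera model; the intrinsic values and the synthetic depth map are placeholder assumptions for demonstration, not parameters from any specific project setup.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metres) into an N x 3 point cloud.

    Standard pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # keep only pixels with valid depth

# Placeholder intrinsics and a synthetic depth map, for demonstration only.
fx = fy = 525.0
cx, cy = 319.5, 239.5
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
cloud = depth_to_point_cloud(depth, fx, fy, cx, cy)
print(cloud.shape)  # (307200, 3)
```

In a full system, per-view point clouds like this would be fused across viewpoints and sensors before semantic labeling and localization.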
The research aims to advance robust, automated, and interpretable perception and localization frameworks for intelligent transportation systems and robotics, improving environmental awareness, operational safety, and human–machine interaction in complex dynamic environments.
Unleashing the Power of VLMs and Knowledge Graphs for Autonomous Driving Maneuver Detection and Classification
Advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS) critically depend on high-quality vehicle maneuver data for perception, prediction, and planning. However, existing maneuver datasets are often fragmented, costly to annotate, and limited in semantic consistency and interpretability, which restricts their scalability and downstream usability.
Recent trends in autonomous driving research emphasize data-centric AI, multimodal learning, and semantic-rich scene understanding, together with the increasing adoption of vision–language models (VLMs) and structured knowledge representations to reduce manual annotation effort and improve explainability. These developments encourage a transition from isolated maneuver labels toward more context-aware and causally grounded representations.
This research investigates a VLM-driven data and knowledge framework for autonomous driving scenario detection and classification, integrating multimodal maneuver data with structured metadata and knowledge graphs. The study focuses on scalable data collection, temporally coherent multimodal alignment, and VLM-based semantic annotation with human-in-the-loop refinement, as well as knowledge graph construction for maneuver semantics and context modeling.
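To make the knowledge-graph component concrete, here is a minimal sketch using networkx; the node names and relation types (instance_of, occurs_in, precedes, and so on) are an illustrative toy schema, not the ontology actually used in this project.

```python
import networkx as nx

# Illustrative only: a tiny maneuver knowledge graph with a made-up schema.
kg = nx.MultiDiGraph()

# Maneuver instances linked to semantic classes and scene context.
kg.add_edge("maneuver_001", "lane_change_left", key="instance_of")
kg.add_edge("maneuver_001", "highway_merge_scene", key="occurs_in")
kg.add_edge("maneuver_001", "ego_vehicle", key="performed_by")
kg.add_edge("maneuver_001", "maneuver_002", key="precedes")
kg.add_edge("lane_change_left", "lateral_maneuver", key="subclass_of")

def maneuvers_in(kg, scene):
    """Return all maneuver nodes linked to a scene via 'occurs_in'."""
    return [u for u, v, k in kg.edges(keys=True)
            if k == "occurs_in" and v == scene]

print(maneuvers_in(kg, "highway_merge_scene"))  # ['maneuver_001']
```

In practice, such edges would be populated from VLM-generated annotations and refined with human-in-the-loop checks, so that downstream queries can retrieve maneuvers by semantic class and context.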
The proposed research aims to establish a benchmark-level framework that improves the accuracy, efficiency, semantic richness, robustness, and interpretability of maneuver detection, thereby enhancing safety awareness and decision reliability in learning-based ADAS and ADS.
Digital Twin–Based Multimodal Defect Detection and Condition Monitoring for Infrastructure Assets
Critical assets such as bridges, roads, tunnels, pipelines, industrial gears, and building infrastructure experience gradual degradation due to aging, environmental exposure, and operational stress. Traditional inspection approaches are largely manual and periodic, making early defect and anomaly detection difficult and limiting effective lifecycle management.
Recent trends in smart infrastructure and intelligent maintenance emphasize data-driven defect detection, data analysis and mining, and the fusion of image data with multiple sensor signals. In parallel, digital twin technologies and AR/VR-based three-dimensional visualization are increasingly adopted to integrate monitoring data and analytical models, enabling immersive, intuitive, and continuous condition assessment and decision support.
This research investigates a defect and anomaly detection framework that combines visual inspection data with multi-sensor signals within a digital twin–enabled and AR/VR-enhanced monitoring platform. The study focuses on multimodal data analysis, anomaly modeling, and 3D spatial visualization and interaction for scalable condition assessment across diverse assets, including bridges, roads, industrial machinery, and building components.
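As a simple illustration of multimodal anomaly modeling, the sketch below fuses an image-derived defect score with physical sensor readings and flags joint outliers with an unsupervised detector; the feature names, synthetic data, and the choice of scikit-learn's IsolationForest are assumptions for demonstration only, not the project's actual models.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-ins for three fused features (illustrative names):
#   column 0 - per-inspection crack score from an image model
#   column 1 - RMS vibration amplitude from an accelerometer
#   column 2 - surface temperature in degrees Celsius
normal = np.column_stack([
    rng.normal(0.1, 0.05, 500),   # low crack scores
    rng.normal(1.0, 0.2, 500),    # nominal vibration
    rng.normal(20.0, 3.0, 500),   # nominal temperature
])
anomalous = np.array([[0.8, 3.5, 45.0]])  # joint multimodal outlier

# Fit an unsupervised detector on the fused feature vectors.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(anomalous))   # [-1] -> flagged as anomalous
print(model.predict(normal[:3]))  # mostly [1] -> nominal
```

Detections of this kind would then be attached to the corresponding asset in the digital twin, where AR/VR views situate each flagged condition in 3D space.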
This research aims to improve early defect detection, enhance maintenance efficiency, and support predictive lifecycle management and immersive human–machine interaction for intelligent infrastructure and industrial systems.
National Research Council Canada’s Aging in Place Program: An LLM-based Pipeline for Preclinical Staging of Alzheimer’s Disease in Intelligent Healthcare
Alzheimer’s disease (AD) develops over a long preclinical stage during which subtle cognitive decline is difficult to detect using conventional diagnostic methods that are often invasive, costly, and insensitive to early symptoms. Recent trends in aging-in-place healthcare and digital biomarkers highlight spontaneous speech as a non-invasive and scalable source of cognitive indicators, while advances in large language models (LLMs) offer new opportunities for language-based analysis.
This research, under the National Research Council Canada’s Aging in Place Program, investigates an LLM-based pipeline for preclinical staging of Alzheimer’s disease, focusing on modeling fine-grained linguistic and paralinguistic anomalies that are typically overlooked by general-purpose LLMs. By enhancing weak pathological cues and integrating them into structured language representations, the research aims to improve early-stage AD detection and interpretability.
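For a flavor of the paralinguistic cues involved, the sketch below computes a few simple speech-timing and lexical features (pause rate, type-token ratio, filler rate, immediate repetitions) from a word-level transcript; the transcript format and feature definitions are illustrative assumptions, not the project's clinical pipeline or validated biomarkers.

```python
# Hypothetical transcript as (word, start_s, end_s) tuples with timestamps.
words = [("so", 0.0, 0.3), ("um", 1.4, 1.7), ("I", 2.9, 3.0),
         ("went", 3.1, 3.4), ("to", 3.5, 3.6), ("the", 3.7, 3.8),
         ("the", 4.9, 5.0), ("store", 5.1, 5.5)]

FILLERS = {"um", "uh", "er"}  # illustrative filler inventory

def speech_features(words, pause_threshold=0.5):
    """Compute toy linguistic/paralinguistic features from timed words."""
    tokens = [w.lower() for w, _, _ in words]
    gaps = [words[i + 1][1] - words[i][2] for i in range(len(words) - 1)]
    n_pauses = sum(g > pause_threshold for g in gaps)
    return {
        "type_token_ratio": len(set(tokens)) / len(tokens),
        "pause_rate": n_pauses / len(gaps),
        "mean_pause_s": sum(g for g in gaps if g > pause_threshold)
                        / max(1, n_pauses),
        "filler_rate": sum(t in FILLERS for t in tokens) / len(tokens),
        "repetitions": sum(tokens[i] == tokens[i + 1]
                           for i in range(len(tokens) - 1)),
    }

print(speech_features(words))
```

Features along these lines could serve as structured signals that amplify weak pathological cues before they are integrated into the language representations an LLM reasons over.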
This research supports early intervention and continuous cognitive monitoring, contributing to explainable, scalable, and non-invasive intelligent healthcare solutions with significant clinical and societal impact.